From 297e74abca4731be77801e1e725347baa5b38781 Mon Sep 17 00:00:00 2001
From: Wei Chen
Date: Wed, 5 Apr 2017 17:09:18 +0800
Subject: [PATCH] xen/arm: Introduce a macro to synchronize SError

In previous patches, we have provided the ability to synchronize
SErrors in exception entries. But we haven't synchronized SErrors
while returning to the guest or doing a context switch. So we still
have two risks:

1. Slipping hypervisor SErrors to the guest. For example, the
   hypervisor triggers an SError while returning to the guest, but
   this SError may be delivered after entering the guest. With the
   "DIVERSE" option, this SError would be routed back to the guest
   and panic the guest. But actually, we should crash the whole
   system because of this hypervisor SError.

2. Slipping a previous guest's SErrors to the next guest. With the
   "FORWARD" option, if the hypervisor triggers an SError while
   context switching, this SError may be delivered after switching
   to the next vCPU. In this case, this SError will be forwarded to
   the next vCPU and may panic an incorrect guest.

So we have to introduce this macro to synchronize SErrors while
returning to the guest and doing a context switch.

In this macro, we use an ASSERT to make sure aborts are unmasked,
because although we unmasked aborts in the entries, we don't know
whether someone will mask them in the future. We also added a
barrier to this macro to prevent the compiler from reordering our
asm volatile code.

Signed-off-by: Wei Chen
Signed-off-by: Stefano Stabellini
Reviewed-by: Stefano Stabellini
---
 xen/include/asm-arm/processor.h | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/xen/include/asm-arm/processor.h b/xen/include/asm-arm/processor.h
index bb24bee94e..855ded1b07 100644
--- a/xen/include/asm-arm/processor.h
+++ b/xen/include/asm-arm/processor.h
@@ -723,6 +723,18 @@ void abort_guest_exit_end(void);
     ( (unsigned long)abort_guest_exit_end == (r)->pc ) \
 )
 
+/*
+ * Synchronize SError unless the feature is selected.
+ * This relies on SErrors currently being unmasked.
+ */
+#define SYNCHRONIZE_SERROR(feat)                                  \
+    do {                                                          \
+        ASSERT(!cpus_have_cap(feat) || local_abort_is_enabled()); \
+        asm volatile(ALTERNATIVE("dsb sy; isb",                   \
+                                 "nop; nop", feat)                \
+                     : : : "memory");                             \
+    } while (0)
+
 #endif /* __ASSEMBLY__ */
 
 #endif /* __ASM_ARM_PROCESSOR_H */
--
2.30.2